
    FluTE, a Publicly Available Stochastic Influenza Epidemic Simulation Model

    Mathematical and computer models of epidemics have contributed to our understanding of the spread of infectious diseases and the measures needed to contain or mitigate them. To help prepare for future influenza seasonal epidemics or pandemics, we developed a new stochastic model of the spread of influenza across a large population. Individuals in this model have realistic social contact networks, and transmission and infection are based on the current state of knowledge of the natural history of influenza. The model has been calibrated so that outcomes are consistent with the 1957/1958 Asian A(H2N2) and 2009 pandemic A(H1N1) influenza viruses. We present examples of how this model can be used to study the dynamics of influenza epidemics in the United States and to simulate how to mitigate or delay them using pharmaceutical interventions and social distancing measures. Computer simulation models play an essential role in informing public policy and evaluating pandemic preparedness plans. We have made the source code of this model publicly available to encourage its use and further development.
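
    To illustrate the kind of stochastic, individual-based simulation the abstract describes, the following Python sketch runs a toy household-plus-community transmission model. It is not FluTE's implementation; the population size, contact structure, and transmission parameters are assumptions chosen only for illustration.

        # Minimal sketch of a stochastic, individual-based influenza simulation,
        # in the spirit of models like FluTE but NOT its actual implementation.
        # Contact structure and parameter values are illustrative assumptions.
        import random

        random.seed(1)

        N_HOUSEHOLDS = 2000
        HOUSEHOLD_SIZE = 4
        BETA_HOUSEHOLD = 0.20     # assumed per-day transmission prob. to a household contact
        BETA_COMMUNITY = 0.00004  # assumed per-day per-infective community transmission prob.
        INFECTIOUS_DAYS = 4

        # state: 'S' susceptible, 'I' infectious (with days remaining), 'R' recovered
        people = [{"hh": hh, "state": "S", "days_left": 0}
                  for hh in range(N_HOUSEHOLDS) for _ in range(HOUSEHOLD_SIZE)]
        # seed a few initial infections
        for p in random.sample(people, 10):
            p["state"], p["days_left"] = "I", INFECTIOUS_DAYS

        def step(people):
            infectious = [p for p in people if p["state"] == "I"]
            inf_by_hh = {}
            for p in infectious:
                inf_by_hh[p["hh"]] = inf_by_hh.get(p["hh"], 0) + 1
            newly_infected = []
            for p in people:
                if p["state"] != "S":
                    continue
                # escape probability combines household and community exposure
                esc = (1 - BETA_HOUSEHOLD) ** inf_by_hh.get(p["hh"], 0)
                esc *= (1 - BETA_COMMUNITY) ** len(infectious)
                if random.random() > esc:
                    newly_infected.append(p)
            for p in infectious:
                p["days_left"] -= 1
                if p["days_left"] == 0:
                    p["state"] = "R"
            for p in newly_infected:
                p["state"], p["days_left"] = "I", INFECTIOUS_DAYS

        for day in range(100):
            step(people)
        # attack rate = fraction ever infected
        attack_rate = sum(p["state"] != "S" for p in people) / len(people)
        print(f"final attack rate: {attack_rate:.1%}")

    Each day, every susceptible's escape probability multiplies an assumed household term and a community term; this is the basic mechanism such models elaborate with age groups, schools, workplaces, and travel.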

    Analysis of the effectiveness of interventions used during the 2009 A/H1N1 influenza pandemic

    Background: Following the emergence of the A/H1N1 2009 influenza pandemic, public health interventions were activated to lessen its potential impact. Computer modelling and simulation can be used to determine the potential effectiveness of the social distancing and antiviral drug therapy interventions that were used at the early stages of the pandemic, providing guidance to public health policy makers on intervention strategies in future pandemics involving a highly pathogenic influenza strain. Methods: An individual-based model of a real community with a population of approximately 30,000 was used to determine the impact of alternative intervention strategies, including those used in the initial stages of the 2009 pandemic. Different interventions, namely school closure and antiviral strategies, were simulated in isolation and in combination to form different plausible scenarios. We simulated epidemics with a reproduction number R0 of 1.5, which aligns with estimates in the range 1.4-1.6 determined from the initial outbreak in Mexico. Results: School closure of 1 week was determined to have minimal effect on reducing the overall illness attack rate. Antiviral drug treatment of 50% of symptomatic cases reduced the attack rate by 6.5 percentage points, from an unmitigated rate of 32.5% to 26%. Treatment of diagnosed individuals combined with additional household prophylaxis reduced the final attack rate to 19%. Further extension of prophylaxis to close contacts (in schools and workplaces) reduced the overall attack rate to 13% and the peak daily illness rate from 120 to 22 per 10,000 individuals. We determined the size of the antiviral stockpile required: the ratio of required antiviral courses to population was 13% for the treatment-only strategy, 25% for treatment and household prophylaxis, and 40% for treatment, household and extended prophylaxis. Additional simulations suggest that coupling school closure with the antiviral strategies further reduces epidemic impact. Conclusions: These results suggest that the aggressive use of antiviral drugs together with extended school closure may substantially slow the rate of influenza epidemic development. These strategies are more rigorous than those actually used during the early stages of the relatively mild 2009 pandemic, and are appropriate for future pandemics with high morbidity and mortality rates.
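
    As a rough illustration of how a lower effective reproduction number translates into a lower final attack rate, the Python sketch below solves the classic homogeneous-mixing final-size relation z = 1 - exp(-Rz). The individual-based model in the paper uses explicit household, school, and workplace contacts, so its attack rates are not expected to match this simple calculation.

        # Illustrative only (not from the paper): map an effective reproduction
        # number R to a final attack rate via the final-size relation z = 1 - exp(-R z).
        import math

        def final_size(R, tol=1e-10):
            """Solve z = 1 - exp(-R z) by fixed-point iteration; z = 0 is always a
            root, so we start near 1 to find the epidemic branch when R > 1."""
            z = 0.99
            for _ in range(1000):
                z_new = 1 - math.exp(-R * z)
                if abs(z_new - z) < tol:
                    break
                z = z_new
            return z_new

        for R in (1.5, 1.3, 1.1, 0.9):
            print(f"R = {R:.1f} -> final attack rate ~ {final_size(R):.1%}")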

    Influenza Outbreak during Sydney World Youth Day 2008: The Utility of Laboratory Testing and Case Definitions on Mass Gathering Outbreak Containment

    BACKGROUND: Influenza causes annual epidemics and often results in extensive outbreaks in closed communities. To minimize transmission, a range of interventions have been suggested. For these to be effective, an accurate and timely diagnosis of influenza is required. This is confirmed by a positive laboratory test result in an individual whose symptoms are consistent with a predefined clinical case definition. However, the utility of these clinical case definitions and of laboratory testing in mass gathering outbreaks remains unknown. METHODS AND RESULTS: An influenza outbreak was identified during World Youth Day 2008 in Sydney. From the data collected on pilgrims presenting to a single clinic, a Markov model was developed and validated against the actual epidemic curve. Simulations were performed to examine the utility of different clinical case definitions and laboratory testing strategies for containment of influenza outbreaks. Clinical case definitions were found to have the greatest impact on averting further cases, with no added benefit when combined with any laboratory test. Although nucleic acid testing (NAT) demonstrated higher utility than indirect immunofluorescence antigen testing or on-site point-of-care testing, this effect was lost when laboratory NAT turnaround times were included. The main benefit of laboratory confirmation was limited to identification of true influenza cases amenable to interventions such as antiviral therapy. CONCLUSIONS: Continuous re-evaluation of case definitions and laboratory testing strategies is essential for effective management of influenza outbreaks during mass gatherings.
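
    The turnaround-time point can be made with a back-of-the-envelope calculation: the Python sketch below compares the infectious days removed by isolating on clinical criteria alone with those removed by waiting for laboratory confirmation. It is not the paper's Markov model, and the sensitivity, infectious-period, and delay values are assumptions for illustration only.

        # Back-of-the-envelope sketch (not the paper's Markov model): why laboratory
        # confirmation adds little containment value once turnaround time is counted.
        # Parameter values below are assumptions for illustration only.
        INFECTIOUS_PERIOD_DAYS = 5.0
        CLINICAL_SENSITIVITY = 0.80   # assumed: fraction of cases meeting the case definition
        NAT_SENSITIVITY = 0.95        # assumed analytic sensitivity of NAT
        NAT_TURNAROUND_DAYS = 2.0     # assumed lab delay before the result is acted on

        def infectious_days_averted(sensitivity, delay_days):
            """Expected fraction of a case's infectious days removed by isolation
            that begins `delay_days` after the case is first assessable."""
            remaining = max(INFECTIOUS_PERIOD_DAYS - delay_days, 0.0)
            return sensitivity * remaining / INFECTIOUS_PERIOD_DAYS

        clinical_only = infectious_days_averted(CLINICAL_SENSITIVITY, delay_days=0.5)
        clinical_plus_nat = infectious_days_averted(CLINICAL_SENSITIVITY * NAT_SENSITIVITY,
                                                    delay_days=0.5 + NAT_TURNAROUND_DAYS)
        print(f"clinical case definition only: {clinical_only:.0%} of infectious days averted")
        print(f"wait for NAT confirmation    : {clinical_plus_nat:.0%} of infectious days averted")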

    Seasonal transmission potential and activity peaks of the new influenza A(H1N1): a Monte Carlo likelihood analysis based on human mobility

    On 11 June 2009 the World Health Organization officially raised the phase of pandemic alert (with regard to the new H1N1 influenza strain) to level 6. We use a global structured metapopulation model integrating worldwide mobility and transportation data to estimate the transmission potential and the relevant model parameters from data on the chronology of the 2009 novel influenza A(H1N1). The method is based on a maximum likelihood analysis of the arrival time distribution generated by the model in 12 countries seeded from Mexico, using one million computationally simulated epidemics. An extended chronology including 93 countries worldwide seeded before 18 June was used to ascertain seasonality effects. We found a best estimate of R0 = 1.75 (95% CI 1.64 to 1.88) for the basic reproductive number. Correlation analysis allows the selection of the most probable seasonal behavior based on the observed pattern, leading to the identification of plausible scenarios for the future unfolding of the pandemic and estimates of pandemic activity peaks in the different hemispheres. We provide estimates of the number of hospitalizations and the attack rate for the next wave, as well as an extensive sensitivity analysis on the disease parameter values. We also studied the effect of systematic therapeutic use of antiviral drugs on the epidemic timeline. The analysis shows the potential for an early epidemic peak occurring in October/November in the Northern hemisphere, likely before large-scale vaccination campaigns could be carried out. We suggest that the planning of additional mitigation policies, such as systematic antiviral treatment, may be key to delaying the activity peak in order to restore the effectiveness of the vaccination programs.
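
    The Monte Carlo likelihood idea can be sketched as follows: for each candidate R0, simulate many stochastic epidemics, record the simulated day of first case importation, and score the observed arrival days against that simulated distribution. The toy Python code below does this with a deliberately simplified importation model; the growth model, travel rate, and "observed" arrival days are synthetic assumptions, not the paper's metapopulation model or data.

        # Sketch of Monte Carlo maximum likelihood on arrival times (synthetic).
        import math
        import random

        random.seed(0)
        GENERATION_TIME = 3.0  # days, assumed

        def simulate_arrival_day(R0, travel_rate, horizon=120):
            """Toy stochastic importation model: prevalence in the seed country grows
            exponentially at rate (R0-1)/Tg; each day the arrival of the first case
            abroad is a Bernoulli event with prob. 1 - exp(-travel_rate * prevalence)."""
            growth = (R0 - 1) / GENERATION_TIME
            for day in range(1, horizon):
                prevalence = math.exp(growth * day)
                if random.random() < 1 - math.exp(-travel_rate * prevalence):
                    return day
            return horizon

        def log_likelihood(R0, observed, travel_rate=1e-4, n_sims=2000):
            counts = {}
            for _ in range(n_sims):
                d = simulate_arrival_day(R0, travel_rate)
                counts[d] = counts.get(d, 0) + 1
            ll = 0.0
            for obs in observed:
                p = (counts.get(obs, 0) + 1) / (n_sims + 1)  # add-one smoothing
                ll += math.log(p)
            return ll

        observed_arrival_days = [38, 41, 45, 47, 52]  # synthetic "observed" arrivals
        for R0 in (1.4, 1.6, 1.75, 1.9, 2.1):
            print(f"R0 = {R0:.2f}  log-likelihood = {log_likelihood(R0, observed_arrival_days):.1f}")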

    Quarantine for pandemic influenza control at the borders of small island nations

    Background: Although border quarantine is included in many influenza pandemic plans, detailed guidelines have yet to be formulated, including considerations for the optimal quarantine length. Motivated by the situation of small island nations, which will probably experience the introduction of pandemic influenza via just one airport, we examined the potential effectiveness of quarantine as a border control measure. Methods: Analysing the detailed epidemiologic characteristics of influenza, we modelled the effectiveness of quarantine at the borders of islands as the relative reduction of the risk of releasing infectious individuals into the community, explicitly accounting for the presence of asymptomatic infected individuals. The potential benefit of adding rapid diagnostic testing to the quarantine process was also considered. Results: We predict that 95% and 99% effectiveness in preventing the release of infectious individuals into the community could be achieved with quarantine periods longer than 4.7 and 8.6 days, respectively. If rapid diagnostic testing is combined with quarantine, the quarantine lengths needed to achieve 95% and 99% effectiveness could be shortened to 2.6 and 5.7 days, respectively. Sensitivity analysis revealed that quarantine alone for 8.7 days, or quarantine for 5.7 days combined with rapid diagnostic testing, could prevent secondary transmissions caused by released infectious individuals for a plausible range of prevalence in the source country (up to 10%) and for a modest number of incoming travellers (up to 8,000 individuals). Conclusion: Quarantine at the borders of island nations could contribute substantially to preventing the arrival of pandemic influenza (or at least delaying the arrival date). For small island nations we recommend consideration of quarantine alone for 9 days, or quarantine for 6 days combined with rapid diagnostic testing (if available). © 2009 Nishiura et al; licensee BioMed Central Ltd.
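
    The relationship between quarantine length and effectiveness can be illustrated with percentiles of an assumed detection-delay distribution: if effectiveness is the probability that an infected traveller becomes detectable before release, the quarantine length required for a target effectiveness is the corresponding percentile. The Python sketch below uses a gamma distribution with made-up parameters and an assumed rapid-test sensitivity; it mirrors the structure of the calculation, not the paper's fitted model.

        # Illustrative calculation (not the paper's exact model): treat the time from
        # infection to becoming detectable in quarantine as a gamma-distributed delay,
        # and read the required quarantine length off its percentiles.
        from scipy.stats import gamma

        MEAN_DELAY_DAYS = 3.0   # assumed mean time until an infected traveller is detectable
        SHAPE = 2.5             # assumed shape parameter (controls the tail)
        delay = gamma(a=SHAPE, scale=MEAN_DELAY_DAYS / SHAPE)

        for target in (0.95, 0.99):
            days = delay.ppf(target)
            print(f"{target:.0%} effectiveness -> quarantine at least {days:.1f} days")

        # An entry test with assumed sensitivity SENS catches that fraction immediately,
        # so the residual risk after t days of quarantine is (1 - SENS) * (1 - CDF(t)).
        SENS = 0.70  # assumed rapid-test sensitivity
        for target in (0.95, 0.99):
            residual = 1 - target
            days = delay.ppf(max(1 - residual / (1 - SENS), 0))
            print(f"{target:.0%} with testing -> quarantine at least {days:.1f} days")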

    A systematic review to identify areas of enhancements of pandemic simulation models for operational use at provincial and local levels

    Background: In recent years, computer simulation models have supported the development of pandemic influenza preparedness policies. However, U.S. policymakers have raised several concerns about the practical use of these models. In this review paper, we examine the extent to which the current literature already addresses these concerns and identify means of enhancing the current models for higher operational use. Methods: We surveyed PubMed and other sources for published research literature on simulation models for influenza pandemic preparedness. We identified 23 models published between 1990 and 2010 that consider single-region (e.g., country, province, city) outbreaks and multi-pronged mitigation strategies. We developed a plan for examination of the literature based on the concerns raised by the policymakers. Results: While examining the concerns about the adequacy and validity of data, we found that although the epidemiological data supporting the models appear to be adequate, they should be validated through as many updates as possible during an outbreak. Interfaces for accessing and retrieving demographic data and translating them into model parameters must also improve. Regarding the concern about the credibility and validity of modeling assumptions, we found that the models often simplify reality to reduce computational burden. Such simplifications may be permissible if they do not interfere with the performance assessment of the mitigation strategies. We also agreed with the concern that social behavior is inadequately represented in pandemic influenza models. Our review showed that the models consider only a few social-behavioral aspects, including contact rates, withdrawal from work or school due to symptom appearance or to care for sick relatives, and compliance with social distancing, vaccination, and antiviral prophylaxis. The concern about the degree of accessibility of the models is palpable, since we found only three models that are currently accessible to the public, while other models are seeking public accessibility. Policymakers would prefer models scalable to any population size that can be downloaded and run on personal computers, but scaling models to larger populations often requires computational resources beyond personal computers and laptops. As a limitation, we note that some existing models could not be included in our review due to their limited available documentation discussing the choice of relevant parameter values. Conclusions: To adequately address the concerns of the policymakers, we need continuing model enhancements in critical areas, including: updating of epidemiological data during a pandemic, smooth handling of large demographic databases, incorporation of a broader spectrum of social-behavioral aspects, updating of information on contact patterns, adaptation of recent methodologies for collecting human mobility data, and improvement of computational efficiency and accessibility.

    Early efforts in modeling the incubation period of infectious diseases with an acute course of illness

    The incubation period of infectious diseases, the time from infection with a microorganism to the onset of disease, is directly relevant to prevention and control. Since explicit models of the incubation period enhance our understanding of the spread of disease, previous classic studies were revisited, focusing on the modeling methods employed and paying particular attention to relatively unknown historical efforts. The earliest study on the incubation period of pandemic influenza was published in 1919, providing estimates of the incubation period of Spanish flu from the daily incidence on ships departing from several ports in Australia. Although the study explicitly dealt with an unknown time of exposure, the assumed exposure periods, within which infection was taken to be equally probable, were too long and thus likely resulted in slight underestimates of the incubation period.
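
    The estimation problem these early studies faced, an incubation-period distribution observed only through an exposure window with an unknown infection time, can be sketched as a small interval-censored maximum-likelihood fit. The Python code below uses synthetic data and an assumed lognormal incubation distribution; it averages the density over a grid of possible infection times within the window, one standard way of handling the unknown exposure time.

        # Sketch of incubation-period estimation with an unknown exposure time
        # (synthetic data; the lognormal "truth" and window length are assumptions).
        import numpy as np
        from scipy.stats import lognorm
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)

        true_dist = lognorm(s=0.5, scale=1.6)           # synthetic truth: median ~1.6 days
        n = 200
        window_len = 3.0                                # days of possible exposure
        infection_time = rng.uniform(0, window_len, n)  # unknown in practice
        onset_time = infection_time + true_dist.rvs(n, random_state=rng)

        def neg_log_lik(params):
            s, scale = np.exp(params)                   # keep parameters positive
            # average the density over a grid of possible infection times in the window
            grid = np.linspace(0, window_len, 31)
            dens = lognorm.pdf(onset_time[:, None] - grid[None, :], s=s, scale=scale)
            return -np.sum(np.log(dens.mean(axis=1) + 1e-300))

        fit = minimize(neg_log_lik, x0=np.log([0.8, 2.0]), method="Nelder-Mead")
        s_hat, scale_hat = np.exp(fit.x)
        print(f"estimated median incubation ~ {scale_hat:.2f} days (true 1.60)")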

    Variance in brain volume with advancing age: implications for defining the limits of normality

    Background: Statistical models of normal ageing brain tissue volumes may support earlier diagnosis of increasingly common, yet still fatal, neurodegenerative diseases. For example, the statistically defined distribution of normal ageing brain tissue volumes may be used as a reference against which to assess patient volumes. To date, such models have often been derived from mean values that were assumed to represent the distributions and boundaries, i.e. percentile ranks, of brain tissue volume. Since this assumption had not previously been tested, the objective of the present study was to determine whether it is robust, i.e. whether regression models derived from mean values accurately represent the distributions and boundaries of brain tissue volume at older ages. Materials and Methods: We acquired T1-weighted magnetic resonance (MR) brain images of 227 normal and 219 Alzheimer's disease (AD) subjects (aged 55-89 years) from publicly available databanks. Using nonlinear regression within both samples, we compared mean and percentile rank estimates of whole brain tissue volume by age. Results: In both the normal and AD samples, mean regression estimates of brain tissue volume often did not accurately represent percentile rank estimates (errors = -74% to 75%). In the normal sample, mean estimates generally underestimated differences in brain volume at percentile ranks below the mean. Conversely, in the AD sample, mean estimates generally underestimated differences in brain volume at percentile ranks above the mean. Differences between ages at the 5th percentile rank of normal subjects were ~39% greater than mean differences in the AD subjects. Conclusions: While more data are required to make true population inferences, our results indicate that mean regression estimates may not accurately represent the distributions of ageing brain tissue volumes. This suggests that percentile rank estimates will be required to robustly define the limits of brain tissue volume in normal ageing and neurodegenerative disease.
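
    Why a mean trend can misplace the lower limit is easy to see with synthetic data in which the spread of brain volume widens with age: the 5th percentile then falls faster than the mean. The short Python sketch below demonstrates this with made-up volumes and ages; it is not the study's data or regression model.

        # Illustrative comparison (synthetic data, not the study's MR measurements):
        # when the spread of brain volume widens with age, the mean trend understates
        # how fast the lower percentile ranks fall.
        import numpy as np

        rng = np.random.default_rng(42)
        age = rng.uniform(55, 89, 5000)
        # assumed synthetic model: mean volume declines linearly, spread grows with age
        mean_vol = 1200 - 4.0 * (age - 55)   # ml
        sd_vol = 40 + 1.5 * (age - 55)       # heteroscedastic spread
        volume = rng.normal(mean_vol, sd_vol)

        for lo, hi in ((55, 60), (84, 89)):
            sel = (age >= lo) & (age < hi)
            m = volume[sel].mean()
            p5 = np.percentile(volume[sel], 5)
            print(f"age {lo}-{hi}: mean = {m:6.0f} ml, 5th percentile = {p5:6.0f} ml")
        # The drop in the 5th percentile across the age range exceeds the drop in the
        # mean, so a mean-only model would misplace the lower limit of "normal".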

    Use of brain MRI atlases to determine boundaries of age-related pathology: the importance of statistical method

    Neurodegenerative disease diagnoses may be supported by the comparison of an individual patient's brain magnetic resonance image (MRI) with a voxel-based atlas of normal brain MRI. Most current brain MRI atlases are of young to middle-aged adults and parametric, e.g., mean ± standard deviation (SD); these atlases require data to be Gaussian. Brain MRI data, e.g., grey matter (GM) proportion images, from normal older subjects are apparently not Gaussian. We created a nonparametric and a parametric atlas of the normal limits of GM proportions in older subjects and compared their classifications of GM proportions in Alzheimer's disease (AD) patients. Using publicly available brain MRI from 138 normal subjects and 138 subjects diagnosed with AD (all 55-90 years), we created: a mean ± SD atlas to estimate parametrically the percentile ranks and limits of normal ageing GM; and, separately, a nonparametric, rank order-based GM atlas from the same normal ageing subjects. GM images from AD patients were then classified with respect to each atlas to determine the effect the statistical distributions had on classifications of proportions of GM in AD patients. The parametric atlas often defined the lower normal limit of the proportion of GM to be negative (which does not make sense physiologically, as the lowest possible proportion is zero). Because of this, for approximately half of the AD subjects, 25-45% of voxels were classified as normal when compared to the parametric atlas but as abnormal when compared to the nonparametric atlas. These voxels were mainly concentrated in the frontal and occipital lobes. To our knowledge, we have presented the first nonparametric brain MRI atlas. In conditions where there is increasing variability in brain structure, such as in old age, nonparametric brain MRI atlases may represent the limits of normal brain structure more accurately than parametric approaches. Therefore, we conclude that the statistical method used for construction of brain MRI atlases should be selected taking into account the population and aim under study. Parametric methods are generally robust for defining central tendencies, e.g., means, of brain structure. Nonparametric methods are advisable when studying the limits of brain structure in ageing and neurodegenerative disease.
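
    The core statistical point, that a mean ± SD limit can fall below zero for skewed, zero-bounded data while a rank-based percentile cannot, can be shown in a few lines. The Python sketch below uses a synthetic, skewed set of grey-matter proportions for a single voxel; the sample size matches the 138 normal subjects mentioned above, but the values themselves are made up.

        # Minimal sketch of the two ways of defining a voxel's lower normal limit
        # (synthetic grey-matter proportions, not the atlas data). For skewed,
        # zero-bounded data the parametric limit is typically negative, so no
        # patient voxel would ever fall below it.
        import numpy as np

        rng = np.random.default_rng(7)
        # assumed synthetic voxel: GM proportion is skewed towards zero in older subjects
        gm_proportion = np.clip(rng.normal(0.10, 0.12, 138), 0, 1)

        parametric_lower = gm_proportion.mean() - 1.645 * gm_proportion.std(ddof=1)
        nonparametric_lower = np.percentile(gm_proportion, 5)

        print(f"parametric 5% lower limit   : {parametric_lower:+.3f}")
        print(f"nonparametric 5th percentile: {nonparametric_lower:+.3f}")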